KMID : 1100620230100040354
Clinical and Experimental Emergency Medicine
2023 Volume.10 No. 4 p.354 ~ p.362
Explainable artificial intelligence in emergency medicine: an overview
Yohei Okada

Yilin Ning
Marcus Eng Hock Ong
Abstract
Artificial intelligence (AI) and machine learning (ML) have the potential to revolutionize emergency medical care by enhancing triage systems, improving diagnostic accuracy, refining prognostication, and optimizing various aspects of clinical care. However, as clinicians often lack AI expertise, they may perceive AI as a "black box," leading to trust issues. To address this, "explainable AI," which makes AI functionality understandable to end users, is important. This review presents the definitions, importance, and role of explainable AI, as well as potential challenges to its use in emergency medicine. First, we introduce the terms explainability, interpretability, and transparency of AI models; these terms sound similar but play different roles in discussions of AI. Second, we indicate that explainable AI is required in clinical settings for reasons of justification, control, improvement, and discovery, and provide examples. Third, we describe three major categories of explainability: pre-modeling explainability, interpretable models, and post-modeling explainability, and present examples (especially for post-modeling explainability) such as visualization, simplification, text justification, and feature relevance. Last, we discuss the challenges of implementing AI and ML models in clinical settings and highlight the importance of collaboration between clinicians, developers, and researchers. This paper summarizes the concept of "explainable AI" for emergency medicine clinicians and may help them understand explainable AI in emergency contexts.
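One of the post-modeling explainability techniques named above, feature relevance, can be sketched with permutation importance: shuffle one feature at a time and measure how much the model's test performance degrades. The snippet below is a minimal illustration, not from the paper; the "triage-like" feature names and synthetic data are hypothetical, chosen only to make the output readable.

```python
# Illustrative sketch of feature relevance via permutation importance.
# Data and feature names (heart_rate, systolic_bp, age) are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))  # three synthetic features
# Outcome driven mostly by the first feature, partly by the second,
# and not at all by the third.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Shuffling an influential feature should drop accuracy; shuffling an
# irrelevant one should not.
result = permutation_importance(
    model, X_test, y_test, n_repeats=20, random_state=0
)
for name, imp in zip(["heart_rate", "systolic_bp", "age"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

A clinician-facing report would present these importances alongside the prediction, which is one way a "black box" score can be turned into a justification that end users can inspect.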
KEYWORD
Artificial intelligence, Machine learning, Resuscitation, Emergency medicine